
A Novel Multi-Focus Image Fusion Method Based on Stochastic Coordinate Coding and Local Density Peaks Clustering



Abstract

Multi-focus image fusion is used in image processing to generate an all-in-focus image with a large depth of field (DOF) from the original multi-focus source images. Different approaches in the spatial and transform domains have been used to fuse multi-focus images. As one of the most popular image processing techniques, dictionary-learning-based sparse representation achieves strong performance in multi-focus image fusion. Most existing dictionary-learning-based multi-focus image fusion methods use the whole source images directly for dictionary learning, which incurs a high error rate and a high computational cost in the dictionary learning process. This paper proposes a novel image fusion framework based on stochastic coordinate coding integrated with local density peaks clustering. The proposed multi-focus image fusion method consists of three steps. First, the source images are split into small image patches, and the patches are classified into a few groups by local density peaks clustering. Next, the grouped image patches are used for sub-dictionary learning by stochastic coordinate coding, and the trained sub-dictionaries are combined into a single dictionary for sparse representation. Finally, the simultaneous orthogonal matching pursuit (SOMP) algorithm is used to compute the sparse representation. After these three steps, the obtained sparse coefficients are fused following the max-L1-norm rule, and the fused coefficients are inversely transformed into an image using the learned dictionary. The results and analysis of comparison experiments demonstrate that the fused images produced by the proposed method have higher quality than those of existing state-of-the-art methods.
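To make the final fusion stage concrete, below is a minimal sketch of the max-L1-norm rule applied to sparse coefficients, followed by the inverse transform back to patches with the learned dictionary. This illustrates only that step of the pipeline described in the abstract; the function name, array shapes, and the decision to compare whole-patch L1 norms are illustrative assumptions, and the stochastic coordinate coding, density peaks clustering, and SOMP stages are not shown.

```python
import numpy as np

def fuse_sparse_coefficients(alpha_a, alpha_b, dictionary):
    """Fuse per-patch sparse coefficients with a max-L1-norm rule (sketch).

    alpha_a, alpha_b : (n_atoms, n_patches) sparse codes of the two source
        images, computed over the same learned dictionary.
    dictionary       : (patch_dim, n_atoms) learned dictionary D.

    Returns the reconstructed fused patches, shape (patch_dim, n_patches).
    """
    # Activity level of each patch = L1 norm of its sparse code.
    l1_a = np.abs(alpha_a).sum(axis=0)
    l1_b = np.abs(alpha_b).sum(axis=0)

    # Patch by patch, keep the code with the larger L1 norm (broadcast over atoms).
    choose_a = l1_a >= l1_b
    alpha_fused = np.where(choose_a, alpha_a, alpha_b)

    # Inverse transform: each fused patch is rebuilt as D @ alpha.
    return dictionary @ alpha_fused
```

The fused patches would then be placed back at their original positions (averaging any overlaps) to form the all-in-focus image.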

Bibliographic Information

  • Author
  • Affiliation
  • Year: 2016
  • Pages
  • Format: PDF
  • Language: eng
  • CLC Classification
